Introduction

This report describes the results of a preregistered study available at: https://osf.io/w46r9.


Note also that these data were cleaned beforehand. Five datasets—3 Qualtrics surveys and 2 Inquisit tasks—were merged through inner joins, so that only participants who took part in every step of the study were retained. Missing data will be imputed later on. Duplicates were resolved with the rempsyc::best_duplicate function, which keeps the duplicate with the fewest missing values and, in case of ties, the first occurrence.
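As a minimal sketch of that pre-cleaning step (using hypothetical object names for the five source datasets, and assuming a shared "id" column as the participant key):

```r
library(dplyr)
library(rempsyc)

# Inner-join the 3 surveys and 2 tasks: only participants present
# in all five datasets survive the merge
full <- Reduce(function(x, y) inner_join(x, y, by = "id"),
               list(survey1, survey2, survey3, inquisit1, inquisit2))

# Keep, for each duplicated id, the row with the fewest missing values
# (first occurrence on ties)
full <- best_duplicate(full, id = "id")
```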

Packages & Data

Packages

library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(report)
library(datawizard)
library(bestNormalize)
library(psych)
library(visdat)
library(missForest)
library(doParallel)

summary(report(sessionInfo()))

The analysis was done using the R Statistical language (v4.2.2; R Core Team, 2022) on Windows 10 x64, using the packages iterators (v1.0.14), doParallel (v1.0.17), interactions (v1.1.5), performance (v0.10.2.5), see (v0.7.4.3), report (v0.5.6), foreach (v1.5.2), datawizard (v0.6.5.15), bestNormalize (v1.9.0), psych (v2.2.9), missForest (v1.5), rempsyc (v0.1.1.2), visdat (v0.5.3) and dplyr (v1.1.0).

Data

# Read data
data <- read.table("data/fulldataset.txt", sep = "\t", header = TRUE)

# Code group variable as factor
data <- data %>% 
  mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
         condition = as.factor(condition))
# Dummy variable (instead of factor) is required by the `interact_plot()` function...

cat(report_participants(data, threshold = 1))

475 participants (Gender: 57.5% women, 40.8% men, 1.26% non-binary, 0.42% missing; Country: 99.16% United States of America, 0.84% other; Race: 77.26% White, 11.79% Black or African American, 4.21% Asian, 3.16% Mixed, 1.26% American Indian or Alaska Native, 2.32% other)

# Allocation ratio
report(data$condition)

x: 2 levels, namely Control (n = 236, 49.68%) and Mindfulness (n = 239, 50.32%)

Preparation

At this stage, we define a list of our relevant variables.

# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
              "KIMS", "BSCS", "BAQ", "SHS", "SHS.mean", "SHS.aggravation",
              "PANAS_pos", "PANAS_neg", "IAT", "SOPT")

Data cleaning

In this section, we prepare the data for analysis: (a) taking care of preliminary exclusions, (b) checking for and exploring missing values, (c) imputing missing data with missForest, (d) computing scale means, and (e) extracting reliability indices for our scales.

Preliminary exclusions

First, we only want to keep participants who, after the debriefing, agreed to maintain their participation in the study.

data %>% 
  count(debriefing)
debriefing n
Yes, I accept to maintain my participation to this study. 473
NA 2

Nobody to exclude based on consent (nobody asked to be removed).

Second, we only want to keep participants who had at least an 80% success rate in the critical experimental manipulation task. Let’s see how many participants fall below that threshold. Participants with missing values on the manipsuccessleft variable will also be excluded, since they did not complete the critical experimental manipulation.

data %>% 
    summarize(success.80 = sum(manipsuccessleft < .80, 
                               na.rm = TRUE),
              is.na = sum(is.na(manipsuccessleft)))
success.80 is.na
34 0

There are 34 participants with a success rate below 80%; let’s exclude them.

data <- data %>% 
    filter(manipsuccessleft >= .80)
cat(report_participants(data, threshold = 1))

441 participants (Gender: 58.0% women, 40.1% men, 1.36% non-binary, 0.45% missing; Country: 99.09% United States of America, 0.91% other; Race: 77.32% White, 11.11% Black or African American, 4.31% Asian, 3.40% Mixed, 1.36% American Indian or Alaska Native, 2.49% other)

Let’s also exclude those who failed two or more of the three attention checks (i.e., keep those with a score of two or more).

data <- data %>% 
    mutate(att_check = rowSums(
      select(., att_check1, att_check2, att_check3)))

data %>% 
  count(att_check)
att_check n
0 2
1 3
2 18
3 416
NA 2

That’s 5 more participants who failed two or more checks (scores of 0 or 1); the filter below also drops the 2 participants with missing attention-check data, for 7 exclusions in total (441 - 7 = 434).

data <- data %>% 
  filter(att_check >= 2)

cat(report_participants(data, threshold = 1))

434 participants (Gender: 58.3% women, 40.3% men, 1.38% non-binary; Country: 99.54% United States of America, 0.46% other; Race: 77.65% White, 11.29% Black or African American, 4.15% Asian, 3.46% Mixed, 1.38% American Indian or Alaska Native, 2.07% other)

Explore missing data

Missing items

# Check for nice_na
nice_na(data, scales = c("BSCS", "BAQ", "KIMS"))
var items na cells na_percent na_max na_max_percent all_na
BSCS_1:BSCS_7 7 0 3038 0.00 0 0.00 0
BAQ_1:BAQ_12 12 0 5208 0.00 0 0.00 0
KIMS_1:KIMS_39 39 0 16926 0.00 0 0.00 0
Total 276 35587 119784 29.71 83 30.07 0

No missing item-level data for our scales of interest, yeah! (The missingness reported in the Total row comes from other variables in the dataset.)

Patterns of missing data

Let’s check for patterns of missing data.

# Smaller subset of data for easier inspection
data %>%
  select(manualworkerId:att_check) %>%
  vis_miss

Little’s MCAR test

# Let's use Little's MCAR test to confirm
# We have to proceed by "scale" because the function can only
# support 30 variables max at a time
library(naniar)
data %>% 
  select(BSCS_1:BSCS_7) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 0 1
# With no missing values in these items, the test is degenerate and its p-value is not meaningful.

data %>% 
  select(BAQ_1:BAQ_12) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 0 1
# With no missing values in these items, the test is degenerate and its p-value is not meaningful.

data %>% 
  select(KIMS_1:KIMS_20) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 1 1
# With no missing values in these items, the test is degenerate and its p-value is not meaningful.

data %>% 
  select(KIMS_21:KIMS_39) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 0 1
# With no missing values in these items, the test is degenerate and its p-value is not meaningful.

Impute missing data

Here, we impute missing data with the missForest package, one of the best-performing imputation methods (see the Details subsection below for references).

Imputation

# Need character variables as factors
# "Error: Can not handle categorical predictors with more than 53 categories."
# So we have to temporarily remove IDs also...
new.data <- data %>% 
  select(-c(manualworkerId, embeddedworkerId,
            att_check1, att_check2, att_check3)) %>% 
  mutate(across(where(is.character), as.factor))

# Parallel processing
registerDoParallel(cores = 4)

# Variables
set.seed(100)
data.imp <- missForest(new.data, verbose = TRUE, parallelize = "variables")
##   removed variable(s) 127 129 137 151 154 158 163 173 176 183 193 203 208 264 due to the missingness of all entries
##   parallelizing over the variables of the input data matrix 'xmis'
##   missForest iteration 1 in progress...done!
##     estimated error(s): NaN 0.1445299 
##     difference(s): 0.00002129105 0.08262014 
##     time: 12.52 seconds
## 
##   missForest iteration 2 in progress...done!
##     estimated error(s): NaN 0.1445299 
##     difference(s): 0.00007110337 0.08689928 
##     time: 12.8 seconds
# Wall time: ~25 s over 2 iterations, parallelized across 4 cores

# Extract imputed dataset
new.data <- data.imp$ximp

There are some variables we don’t actually want to impute, like country; for those, we want to keep the NAs. Let’s add them back, along with the ID variable.

# Add ID
new.data <- bind_cols(manualworkerId = data$manualworkerId, new.data)

# Add back the NAs in country, attention checks, etc.
data <- new.data %>% 
  mutate(country.ip = data$country.ip,
         gender = data$gender,
         att_check1 = data$att_check1, 
         att_check2 = data$att_check2,
         att_check3 = data$att_check3)

Details

Why impute the data? van Ginkel explains,

Regardless of the missingness mechanism, multiple imputation is always to be preferred over listwise deletion. Under MCAR it is preferred because it results in more statistical power, under MAR it is preferred because besides more power it will give unbiased results whereas listwise deletion may not, and under NMAR it is also the preferred method because it will give less biased results than listwise deletion.

van Ginkel, J. R., Linting, M., Rippe, R. C. A., & van der Voort, A. (2020). Rebutting existing misconceptions about multiple imputation as a method for handling missing data. Journal of Personality Assessment, 102(3), 297-308. https://doi.org/10.1080/00223891.2018.1530680

Why missForest? It has been shown to outperform other imputation methods, including the popular MICE (multiple imputation by chained equations; Waljee et al., 2013). It also yields a single completed dataset rather than several, which simplifies subsequent analyses. Finally, it handles mixed data types (missing values in both numeric and categorical variables; Stekhoven & Bühlmann, 2012).

Waljee, A. K., Mukherjee, A., Singal, A. G., Zhang, Y., Warren, J., Balis, U., … & Higgins, P. D. (2013). Comparison of imputation methods for missing laboratory data in medicine. BMJ Open, 3(8), e002847. https://doi.org/10.1136/bmjopen-2013-002847

Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. https://doi.org/10.1093/bioinformatics/btr597
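As a minimal, self-contained illustration of the missForest workflow (on built-in data, not the study data), one can seed random missingness with prodNA(), which ships with the package, and inspect the out-of-bag error of the imputation:

```r
library(missForest)

set.seed(100)
iris.mis <- prodNA(iris, noNA = 0.1)  # randomly delete 10% of cells
imp <- missForest(iris.mis)           # random-forest imputation

head(imp$ximp)  # the completed dataset
imp$OOBerror    # out-of-bag error: NRMSE (numeric) and PFC (categorical)
```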

Scale Means

Now that we have imputed the missing data, we are ready to calculate our scale means.

Trait Self-Control

# Reverse code items 2, 4, 6, 7
data <- data %>% 
  mutate(across(starts_with("BSCS"), .names = "{col}r"))
data <- data %>% 
  mutate(across(c(BSCS_2, BSCS_4, BSCS_6, BSCS_7), ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean BSCS
data <- data %>% 
  mutate(BSCS = rowMeans(select(., BSCS_1r:BSCS_7r)))
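For reference, reverse coding on a 1-to-5 scale maps each response x to max + min - x, which is what nice_reverse() computes (assuming its default min = 1):

```r
library(rempsyc)

x <- c(1, 2, 3, 4, 5)
nice_reverse(x, 5)  # 5 4 3 2 1
5 + 1 - x           # equivalent base-R arithmetic for a 1-5 scale
```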

Trait aggression

# Reverse code item 7
data <- data %>% 
  mutate(across(starts_with("BAQ"), .names = "{col}r"))
data <- data %>% 
  mutate(across(BAQ_7, ~nice_reverse(.x, 7), .names = "{col}r"))

# Get mean BAQ
data <- data %>% 
  mutate(BAQ = rowMeans(select(., BAQ_1r:BAQ_12r)))

Trait Mindfulness

# Reverse code items 3-4, 8, 11-12, 14, 16, 18, 20, 22, 23-24, 27-28, 31-32, 35-36
data <- data %>% 
  mutate(across(starts_with("KIMS"), .names = "{col}r"))
data <- data %>% 
  mutate(across(all_of(paste0("KIMS_", c(3:4, 8, 11:12, 14, 16, 18, 20,
                                         22:24, 27:28, 31:32, 35:36))), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean KIMS
data <- data %>% 
  mutate(KIMS = rowMeans(select(., KIMS_1r:KIMS_39r)))

State Hostility

# labels.part3$SHS_22
# SHS: the two other scales were not added back (an oversight),
# so no reverse scoring is needed.

# Get mean SHS and subscales
data <- data %>% 
  mutate(SHS = rowMeans(select(., SHS_1:SHS_21)),
         SHS.mean = rowMeans(select(., SHS_1:SHS_14)),
         SHS.aggravation = rowMeans(select(., SHS_14:SHS_21)))

PANAS

# No reverse scoring needed for PANAS.
data <- data %>% 
  mutate(PANAS_pos = rowMeans(select(., paste0("PANAS_", seq(1, 10, 2)))),
         PANAS_neg = rowMeans(select(., paste0("PANAS_", seq(2, 10, 2)))))
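The seq() calls select the odd-numbered items (positive affect) and the even-numbered items (negative affect):

```r
paste0("PANAS_", seq(1, 10, 2))  # "PANAS_1" "PANAS_3" "PANAS_5" "PANAS_7" "PANAS_9"
paste0("PANAS_", seq(2, 10, 2))  # "PANAS_2" "PANAS_4" "PANAS_6" "PANAS_8" "PANAS_10"
```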

Intensity * Duration

# Create new variable blastintensity.duration
data <- data %>% 
  mutate(blastintensity.duration = blastintensity * blastduration)

Reliability

Now that we have reverse-scored our items, we can get the reliability coefficients (alpha and omega) for our different scales.

Trait Self-Control

data %>% 
  select(BSCS_1r:BSCS_7r) %>% 
  omega(nfactors = 1)
## Loading required namespace: GPArotation
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.83 
## G.6:                   0.84 
## Omega Hierarchical:    0.83 
## Omega H asymptotic:    1 
## Omega Total            0.84 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##            g  F1*   h2   u2 p2
## BSCS_1r 0.71      0.51 0.49  1
## BSCS_2r 0.70      0.49 0.51  1
## BSCS_3r 0.58      0.33 0.67  1
## BSCS_4r 0.70      0.49 0.51  1
## BSCS_5r 0.46      0.21 0.79  1
## BSCS_6r 0.76      0.58 0.42  1
## BSCS_7r 0.61      0.37 0.63  1
## 
## With Sums of squares  of:
##   g F1* 
##   3   0 
## 
## general/max  26806166228882828   max/min =   1
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 14  and the fit is  0.48 
## The number of observations was  434  with Chi Square =  207.35  with prob <  0.0000000000000000000000000000000000017
## The root mean square of the residuals is  0.09 
## The df corrected root mean square of the residuals is  0.11
## RMSEA index =  0.178  and the 10 % confidence intervals are  0.158 0.2
## BIC =  122.32
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 14  and the fit is  0.48 
## The number of observations was  434  with Chi Square =  207.35  with prob <  0.0000000000000000000000000000000000017
## The root mean square of the residuals is  0.09 
## The df corrected root mean square of the residuals is  0.11 
## 
## RMSEA index =  0.178  and the 10 % confidence intervals are  0.158 0.2
## BIC =  122.32 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.92   0
## Multiple R square of scores with factors      0.85   0
## Minimum correlation of factor score estimates 0.70  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.84 0.83
## Omega general for total scores and subscales  0.83 0.83
## Omega group for total scores and subscales    0.00 0.00

Trait Aggression

data %>% 
  select(BAQ_1r:BAQ_12r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.85 
## G.6:                   0.88 
## Omega Hierarchical:    0.85 
## Omega H asymptotic:    1 
## Omega Total            0.85 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##            g  F1*   h2   u2 p2
## BAQ_1r  0.66      0.43 0.57  1
## BAQ_2r  0.57      0.33 0.67  1
## BAQ_3r  0.69      0.48 0.52  1
## BAQ_4r  0.26      0.07 0.93  1
## BAQ_5r  0.49      0.24 0.76  1
## BAQ_6r  0.70      0.50 0.50  1
## BAQ_7r  0.47      0.23 0.77  1
## BAQ_8r  0.70      0.49 0.51  1
## BAQ_9r  0.75      0.57 0.43  1
## BAQ_10r 0.48      0.23 0.77  1
## BAQ_11r 0.49      0.24 0.76  1
## BAQ_12r 0.50      0.25 0.75  1
## 
## With Sums of squares  of:
##   g F1* 
##   4   0 
## 
## general/max  48571401317850552   max/min =   1
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 54  and the fit is  1.62 
## The number of observations was  434  with Chi Square =  693.9  with prob <  0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000063
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.13
## RMSEA index =  0.165  and the 10 % confidence intervals are  0.155 0.177
## BIC =  365.96
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 54  and the fit is  1.62 
## The number of observations was  434  with Chi Square =  693.9  with prob <  0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000063
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.13 
## 
## RMSEA index =  0.165  and the 10 % confidence intervals are  0.155 0.177
## BIC =  365.96 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.94   0
## Multiple R square of scores with factors      0.88   0
## Minimum correlation of factor score estimates 0.75  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.85 0.85
## Omega general for total scores and subscales  0.85 0.85
## Omega group for total scores and subscales    0.00 0.00

Trait Mindfulness

data %>% 
  select(KIMS_1r:KIMS_39r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.89 
## G.6:                   0.94 
## Omega Hierarchical:    0.86 
## Omega H asymptotic:    0.96 
## Omega Total            0.9 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##             g  F1*   h2   u2 p2
## KIMS_1r  0.24      0.06 0.94  1
## KIMS_2r  0.69      0.47 0.53  1
## KIMS_3r  0.61      0.37 0.63  1
## KIMS_4r  0.57      0.33 0.67  1
## KIMS_5r  0.30      0.09 0.91  1
## KIMS_6r  0.62      0.38 0.62  1
## KIMS_7r  0.30      0.09 0.91  1
## KIMS_8r            0.01 0.99  1
## KIMS_9r            0.03 0.97  1
## KIMS_10r 0.62      0.39 0.61  1
## KIMS_11r 0.47      0.22 0.78  1
## KIMS_12r 0.59      0.35 0.65  1
## KIMS_13r 0.24      0.06 0.94  1
## KIMS_14r 0.66      0.44 0.56  1
## KIMS_15r 0.43      0.19 0.81  1
## KIMS_16r 0.58      0.33 0.67  1
## KIMS_17r 0.22      0.05 0.95  1
## KIMS_18r 0.68      0.46 0.54  1
## KIMS_19r           0.02 0.98  1
## KIMS_20r 0.30      0.09 0.91  1
## KIMS_21r 0.30      0.09 0.91  1
## KIMS_22r 0.69      0.47 0.53  1
## KIMS_23r 0.61      0.37 0.63  1
## KIMS_24r 0.29      0.08 0.92  1
## KIMS_25r 0.28      0.08 0.92  1
## KIMS_26r 0.48      0.23 0.77  1
## KIMS_27r 0.31      0.10 0.90  1
## KIMS_28r 0.54      0.29 0.71  1
## KIMS_29r 0.29      0.08 0.92  1
## KIMS_30r 0.23      0.05 0.95  1
## KIMS_31r 0.30      0.09 0.91  1
## KIMS_32r 0.62      0.38 0.62  1
## KIMS_33r 0.31      0.09 0.91  1
## KIMS_34r 0.48      0.23 0.77  1
## KIMS_35r 0.51      0.26 0.74  1
## KIMS_36r 0.47      0.22 0.78  1
## KIMS_37r 0.30      0.09 0.91  1
## KIMS_38r 0.31      0.10 0.90  1
## KIMS_39r 0.27      0.07 0.93  1
## 
## With Sums of squares  of:
##   g F1* 
## 7.8 0.0 
## 
## general/max  10257299580827460   max/min =   1
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 702  and the fit is  13.55 
## The number of observations was  434  with Chi Square =  5670.49  with prob <  0
## The root mean square of the residuals is  0.18 
## The df corrected root mean square of the residuals is  0.18
## RMSEA index =  0.128  and the 10 % confidence intervals are  0.125 0.131
## BIC =  1407.21
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 702  and the fit is  13.55 
## The number of observations was  434  with Chi Square =  5670.49  with prob <  0
## The root mean square of the residuals is  0.18 
## The df corrected root mean square of the residuals is  0.18 
## 
## RMSEA index =  0.128  and the 10 % confidence intervals are  0.125 0.131
## BIC =  1407.21 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.96   0
## Multiple R square of scores with factors      0.92   0
## Minimum correlation of factor score estimates 0.84  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.90 0.86
## Omega general for total scores and subscales  0.86 0.86
## Omega group for total scores and subscales    0.00 0.00

State Hostility

data %>% 
  select(SHS_1:SHS_21) %>% 
  omega(nfactors = 2)
## 
## Three factors are required for identification -- general factor loadings set to be equal. 
## Proceed with caution. 
## Think about redoing the analysis with alternative values of the 'option' setting.

## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.98 
## G.6:                   0.98 
## Omega Hierarchical:    0.82 
## Omega H asymptotic:    0.83 
## Omega Total            0.98 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##           g   F1*   F2*   h2   u2   p2
## SHS_1  0.73  0.43       0.71 0.29 0.74
## SHS_2  0.75  0.44       0.76 0.24 0.74
## SHS_3  0.66  0.48       0.68 0.32 0.65
## SHS_4  0.75  0.43       0.75 0.25 0.75
## SHS_5  0.72  0.32       0.64 0.36 0.82
## SHS_6  0.80  0.24  0.23 0.75 0.25 0.85
## SHS_7  0.74  0.34       0.66 0.34 0.82
## SHS_8  0.78  0.35       0.75 0.25 0.82
## SHS_9  0.76  0.48       0.81 0.19 0.71
## SHS_10 0.76  0.46       0.80 0.20 0.73
## SHS_11 0.74  0.28       0.65 0.35 0.85
## SHS_12 0.75  0.46       0.78 0.22 0.73
## SHS_13 0.82  0.33       0.80 0.20 0.84
## SHS_14 0.76  0.24  0.20 0.67 0.33 0.85
## SHS_15 0.79        0.34 0.75 0.25 0.83
## SHS_16 0.70        0.35 0.62 0.38 0.79
## SHS_17 0.78        0.47 0.83 0.17 0.73
## SHS_18 0.75        0.42 0.73 0.27 0.76
## SHS_19 0.75  0.30       0.67 0.33 0.84
## SHS_20 0.76  0.46       0.79 0.21 0.74
## SHS_21 0.76  0.36       0.71 0.29 0.81
## 
## With Sums of squares  of:
##     g   F1*   F2* 
## 11.93  2.53  0.84 
## 
## general/max  4.71   max/min =   3.01
## mean percent general =  0.78    with sd =  0.06 and cv of  0.07 
## Explained Common Variance of the general factor =  0.78 
## 
## The degrees of freedom are 169  and the fit is  1.47 
## The number of observations was  434  with Chi Square =  622.88  with prob <  0.000000000000000000000000000000000000000000000000000033
## The root mean square of the residuals is  0.02 
## The df corrected root mean square of the residuals is  0.03
## RMSEA index =  0.079  and the 10 % confidence intervals are  0.072 0.085
## BIC =  -403.47
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 189  and the fit is  4.32 
## The number of observations was  434  with Chi Square =  1833.36  with prob <  0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000072
## The root mean square of the residuals is  0.13 
## The df corrected root mean square of the residuals is  0.14 
## 
## RMSEA index =  0.142  and the 10 % confidence intervals are  0.136 0.148
## BIC =  685.55 
## 
## Measures of factor score adequacy             
##                                                  g   F1*   F2*
## Correlation of scores with factors            0.91  0.70  0.68
## Multiple R square of scores with factors      0.84  0.49  0.47
## Minimum correlation of factor score estimates 0.67 -0.02 -0.07
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*  F2*
## Omega total for total scores and subscales    0.98 0.97 0.91
## Omega general for total scores and subscales  0.82 0.78 0.71
## Omega group for total scores and subscales    0.14 0.19 0.20

PANAS

# PANAS_pos
data %>% 
  select(paste0("PANAS_", seq(1, 10, 2))) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in cov2cor(t(w) %*% r %*% w): diag(.) had 0 or NA entries; non-finite
## result is doubtful
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.82 
## G.6:                   0.81 
## Omega Hierarchical:    0.83 
## Omega H asymptotic:    1 
## Omega Total            0.83 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##            g  F1*   h2   u2 p2
## PANAS_1 0.76      0.58 0.42  1
## PANAS_3 0.74      0.55 0.45  1
## PANAS_5 0.90      0.82 0.18  1
## PANAS_7 0.44      0.20 0.80  1
## PANAS_9 0.61      0.37 0.63  1
## 
## With Sums of squares  of:
##   g F1* 
## 2.5 0.0 
## 
## general/max  Inf   max/min =   NaN
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 5  and the fit is  0.11 
## The number of observations was  434  with Chi Square =  45.82  with prob <  0.0000000099
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.09
## RMSEA index =  0.137  and the 10 % confidence intervals are  0.103 0.175
## BIC =  15.46
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 5  and the fit is  0.11 
## The number of observations was  434  with Chi Square =  45.82  with prob <  0.0000000099
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.09 
## 
## RMSEA index =  0.137  and the 10 % confidence intervals are  0.103 0.175
## BIC =  15.46 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.94   0
## Multiple R square of scores with factors      0.89   0
## Minimum correlation of factor score estimates 0.77  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.83 0.83
## Omega general for total scores and subscales  0.83 0.83
## Omega group for total scores and subscales    0.00 0.00
# PANAS_neg
data %>% 
  select(paste0("PANAS_", seq(2, 10, 2))) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor

## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## diag(.) had 0 or NA entries; non-finite result is doubtful
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.92 
## G.6:                   0.92 
## Omega Hierarchical:    0.92 
## Omega H asymptotic:    1 
## Omega Total            0.92 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##             g  F1*   h2   u2 p2
## PANAS_2  0.86      0.74 0.26  1
## PANAS_4  0.81      0.66 0.34  1
## PANAS_6  0.82      0.66 0.34  1
## PANAS_8  0.88      0.78 0.22  1
## PANAS_10 0.83      0.69 0.31  1
## 
## With Sums of squares  of:
##   g F1* 
## 3.5 0.0 
## 
## general/max  Inf   max/min =   NaN
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 5  and the fit is  0.24 
## The number of observations was  434  with Chi Square =  101.85  with prob <  0.000000000000000000022
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.07
## RMSEA index =  0.211  and the 10 % confidence intervals are  0.177 0.248
## BIC =  71.48
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 5  and the fit is  0.24 
## The number of observations was  434  with Chi Square =  101.85  with prob <  0.000000000000000000022
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.07 
## 
## RMSEA index =  0.211  and the 10 % confidence intervals are  0.177 0.248
## BIC =  71.48 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.96   0
## Multiple R square of scores with factors      0.93   0
## Minimum correlation of factor score estimates 0.85  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.92 0.92
## Omega general for total scores and subscales  0.92 0.92
## Omega group for total scores and subscales    0.00 0.00

t-tests

In this section, we will: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, (d) identify and winsorize outliers, and (e) conduct the t-tests.

Normality

lapply(col.list, function(x) 
  nice_normality(data, 
                 variable = x, 
                 title = x,
                 group = "condition",
                 shapiro = TRUE,
                 histogram = TRUE))
## (Output: 13 normality plots, one per variable in col.list, each combining
## density and QQ plots with Shapiro-Wilk tests, plus histograms by condition.)
Several variables are clearly skewed. Let’s apply transformations. But first, let’s deal with the working memory task, SOPT (Self-Ordered Pointing Task). It is clearly problematic.

Transformation

The function below transforms variables according to the best available transformation (via the bestNormalize package). Note that we set standardize = FALSE, so the variables are transformed but not standardized.

predict_bestNormalize <- function(var) {
  x <- bestNormalize(var, standardize = FALSE, allow_orderNorm = FALSE)
  print(cur_column())
  print(x$chosen_transform)
  cat("\n")
  predict(x)
}
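As a standalone sketch of the bestNormalize API (on simulated data, not the study variables): bestNormalize() fits several candidate transformations, selects the one with the best cross-validated normality statistic, and predict() applies the chosen transformation:

```r
library(bestNormalize)

set.seed(100)
x <- rexp(200)  # toy right-skewed variable
bn <- bestNormalize(x, standardize = FALSE, allow_orderNorm = FALSE)
bn$chosen_transform  # the selected transformation (data-dependent)
x.t <- predict(bn)   # transformed values on the chosen scale
```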

set.seed(100)
data <- data %>% 
  mutate(across(all_of(col.list), 
                predict_bestNormalize,
                .names = "{.col}.t"))
## [1] "blastintensity"
## I(x) Transformation with 434 nonmissing obs.
## 
## [1] "blastduration"
## I(x) Transformation with 434 nonmissing obs.
## 
## [1] "blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 434 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 69.27661 
##  - sd (before standardization) = 31.3388 
## 
## [1] "KIMS"
## Non-Standardized Log_b(x + a) Transformation with 434 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - b = 10 
##  - mean (before standardization) = 0.5285868 
##  - sd (before standardization) = 0.06011001 
## 
## [1] "BSCS"
## Non-Standardized Yeo-Johnson Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.496462 
##  - mean (before standardization) = 5.736007 
##  - sd (before standardization) = 1.73796 
## 
## [1] "BAQ"
## Non-Standardized sqrt(x + a) Transformation with 434 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 1.738352 
##  - sd (before standardization) = 0.3124039 
## 
## [1] "SHS"
## Non-Standardized Box Cox Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.9999576 
##  - mean (before standardization) = 0.2769907 
##  - sd (before standardization) = 0.2598753 
## 
## [1] "SHS.mean"
## Non-Standardized Box Cox Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.9999576 
##  - mean (before standardization) = 0.244397 
##  - sd (before standardization) = 0.261915 
## 
## [1] "SHS.aggravation"
## Non-Standardized Box Cox Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.7758021 
##  - mean (before standardization) = 0.3430183 
##  - sd (before standardization) = 0.3039244 
## 
## [1] "PANAS_pos"
## Non-Standardized Box Cox Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.309279 
##  - mean (before standardization) = 4.799965 
##  - sd (before standardization) = 2.132036 
## 
## [1] "PANAS_neg"
## Non-Standardized sqrt(x + a) Transformation with 434 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 1.395867 
##  - sd (before standardization) = 0.468339 
## 
## [1] "IAT"
## Non-Standardized Yeo-Johnson Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.213857 
##  - mean (before standardization) = 0.4599925 
##  - sd (before standardization) = 0.3868647 
## 
## [1] "SOPT"
## Non-Standardized Yeo-Johnson Transformation with 434 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.1067761 
##  - mean (before standardization) = 2.178345 
##  - sd (before standardization) = 0.5632945
col.list <- paste0(col.list, ".t")

Note. The I(x) transformations above are not actually transformations; I() is a shorthand for passing the data “as is”. This indicates that, for those variables, none of the attempted transformations improved normality, so no transformation is applied. This only appears when standardize is set to FALSE. When set to TRUE, those variables instead show center_scale(x), meaning the data are only standardized, not transformed.

Let’s check if normality was corrected.

# Group normality
lapply(col.list, function(x) 
  nice_normality(data, 
                 x, 
                 "condition",
                 shapiro = TRUE,
                 title = x,
                 histogram = TRUE))
(Per-variable normality plots for the transformed variables displayed here.)

The distributions look rather reasonable now, though not perfect (fortunately, t-tests are quite robust to violations of normality).

We can now resume with the next step: checking variance.

Homoscedasticity

# Plotting variance
plots(lapply(col.list, function(x) {
  nice_varplot(data, x, group = "condition")
  }),
  n_columns = 3)

Variance looks good. No group has four times the variance of any other group. We can now resume with checking outliers.
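The 4:1 variance-ratio rule of thumb used above can be sketched in base R (the function name and simulated data are illustrative):

```r
# Ratio of the largest to the smallest group variance; values under 4
# are commonly considered acceptable for t-tests and ANOVA.
var_ratio <- function(x, g) {
  v <- tapply(x, g, var, na.rm = TRUE)
  max(v) / min(v)
}

set.seed(2)
x <- rnorm(100)
g <- rep(c("Control", "Mindfulness"), each = 50)
var_ratio(x, g)  # well below 4 here
```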

Outliers

We check outliers visually with the plot_outliers function, which draws red lines at +/- 3 median absolute deviations.

plots(lapply(col.list, function(x) {
  plot_outliers(data, x, group = "condition", ytitle = x, binwidth = 0.15)
  }),
  n_columns = 2)

There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.

data %>% 
  filter(condition == "Control") %>% 
  find_mad(col.list, criteria = 3)
## 49 outlier(s) based on 3 median absolute deviations for variable(s): 
##  blastintensity.t, blastduration.t, blastintensity.duration.t, KIMS.t, BSCS.t, BAQ.t, SHS.t, SHS.mean.t, SHS.aggravation.t, PANAS_pos.t, PANAS_neg.t, IAT.t, SOPT.t 
## 
## The following participants were considered outliers for more than one variable: 
## 
##    Row n
## 1    4 3
## 2   21 2
## 3   45 3
## 4   59 2
## 5   71 2
## 6   72 2
## 7   98 2
## 8  149 2
## 9  169 2
## 10 178 2
## 11 189 3
## 12 194 2
## 13 199 2
## 
## Outliers per variable: 
## 
## $SHS.mean.t
##    Row SHS.mean.t_mad
## 1    4       3.419054
## 2   45       3.079263
## 3   59       3.004612
## 4   62       3.372535
## 5   71       3.322570
## 6   72       3.268761
## 7   98       3.179813
## 8  100       3.147693
## 9  110       3.296173
## 10 148       3.372535
## 11 149       3.042767
## 12 169       3.419054
## 13 178       3.042767
## 14 189       3.268761
## 15 194       3.147693
## 16 199       3.004612
## 
## $PANAS_neg.t
##    Row PANAS_neg.t_mad
## 1    4        4.810718
## 2    8        3.875967
## 3    9        3.709650
## 4   21        3.539830
## 5   45        4.957393
## 6   49        4.661657
## 7   59        3.709650
## 8   64        3.366276
## 9   71        4.038990
## 10  72        4.198906
## 11  76        3.875967
## 12  92        3.188731
## 13  97        4.355888
## 14  98        4.510091
## 15 104        4.038990
## 16 133        3.709650
## 17 149        5.244022
## 18 156        3.366276
## 19 158        3.366276
## 20 160        5.384174
## 21 166        4.038990
## 22 169        4.510091
## 23 171        3.539830
## 24 178        4.038990
## 25 187        3.188731
## 26 189        3.539830
## 27 194        4.038990
## 28 199        4.355888
## 29 205        4.810718
## 30 211        4.198906
## 
## $IAT.t
##   Row IAT.t_mad
## 1  60  3.096102
## 
## $SOPT.t
##    Row SOPT.t_mad
## 1    4   3.695020
## 2   21   3.239544
## 3   33  -3.895389
## 4   36  -3.895389
## 5   43   3.678657
## 6   45  -3.895389
## 7   57   3.628411
## 8   78   3.695020
## 9   79  -3.895389
## 10  84  -3.895389
## 11  94   3.662104
## 12  96   3.695020
## 13 111   3.678657
## 14 139   3.695020
## 15 147   3.662104
## 16 155  -3.895389
## 17 189   3.695020
## 18 208   3.695020
data %>% 
  filter(condition == "Mindfulness") %>% 
  find_mad(col.list, criteria = 3)
## 43 outlier(s) based on 3 median absolute deviations for variable(s): 
##  blastintensity.t, blastduration.t, blastintensity.duration.t, KIMS.t, BSCS.t, BAQ.t, SHS.t, SHS.mean.t, SHS.aggravation.t, PANAS_pos.t, PANAS_neg.t, IAT.t, SOPT.t 
## 
## The following participants were considered outliers for more than one variable: 
## 
##   Row n
## 1  84 2
## 2 106 2
## 3 162 2
## 
## Outliers per variable: 
## 
## $KIMS.t
##   Row KIMS.t_mad
## 1  73   3.061291
## 2 175  -3.149689
## 
## $PANAS_neg.t
##    Row PANAS_neg.t_mad
## 1   16        4.038990
## 2   17        4.038990
## 3   19        4.661657
## 4   28        3.875967
## 5   34        4.038990
## 6   45        3.006907
## 7   55        3.366276
## 8   57        4.510091
## 9   59        3.366276
## 10  62        4.355888
## 11  64        5.244022
## 12  81        3.539830
## 13  84        4.810718
## 14  85        3.875967
## 15  88        3.709650
## 16  93        3.366276
## 17  99        3.188731
## 18 106        5.244022
## 19 109        4.198906
## 20 111        3.006907
## 21 124        3.875967
## 22 136        4.957393
## 23 140        3.539830
## 24 144        3.006907
## 25 146        5.384174
## 26 148        3.006907
## 27 154        3.875967
## 28 162        4.198906
## 29 167        4.810718
## 30 172        3.709650
## 31 176        4.810718
## 32 178        3.188731
## 33 179        3.539830
## 34 193        3.709650
## 35 197        3.875967
## 36 205        4.355888
## 
## $IAT.t
##   Row IAT.t_mad
## 1   1  3.412721
## 
## $SOPT.t
##   Row SOPT.t_mad
## 1  84   3.183769
## 2 106   3.126376
## 3 150  -3.356414
## 4 158  -4.837137
## 5 162  -3.356414
## 6 202   3.183769
## 7 209  -3.356414

There are 49 outliers after our transformations in the control group, and 43 in the mindfulness group. That seems mostly due to the extreme positive skew of the negative affect scale of the PANAS.
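The 3-MAD criterion behind `find_mad` can be sketched in base R (the function name and data below are illustrative):

```r
# Robust z-scores: deviations from the median scaled by the median
# absolute deviation (mad() applies the 1.4826 consistency factor),
# flagging observations beyond the chosen number of MADs.
mad_outliers <- function(x, criteria = 3) {
  z <- (x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE)
  which(abs(z) > criteria)
}

set.seed(1)
v <- c(rnorm(100), 10)  # one planted extreme value at position 101
mad_outliers(v)         # includes position 101
```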

Multivariate outliers

For multivariate outliers, it is recommended to use the Minimum Covariance Determinant (MCD), a robust version of the Mahalanobis distance (Leys et al., 2019).

Leys, C., Delacre, M., Mora, Y. L., Lakens, D., & Ley, C. (2019). How to classify, detect, and manage univariate and multivariate outliers, with emphasis on pre-registration. International Review of Social Psychology, 32(1).
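The MCD idea can be sketched in base R with the MASS package (this is a hedged illustration of the approach, not the exact `performance::check_outliers` implementation):

```r
# Robust Mahalanobis distances from an MCD estimate of location and
# scatter, compared against a chi-squared cutoff.
library(MASS)  # ships with R

set.seed(42)
X <- cbind(rnorm(200), rnorm(200))
X[1, ] <- c(8, 8)  # planted multivariate outlier

mcd <- cov.rob(X, method = "mcd")
d2  <- mahalanobis(X, center = mcd$center, cov = mcd$cov)
which(d2 > qchisq(0.999, df = ncol(X)))  # includes row 1
```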

x <- check_outliers(na.omit(data[col.list]), method = "mcd")
x
## 95 outliers detected: cases 3, 6, 10, 19, 21, 26, 28, 52, 53, 56, 70,
##   72, 73, 76, 79, 87, 93, 96, 103, 114, 126, 127, 138, 162, 167, 168, 169,
##   170, 174, 176, 177, 181, 186, 187, 188, 192, 194, 204, 207, 208, 215,
##   219, 222, 232, 234, 235, 236, 238, 240, 244, 248, 256, 258, 271, 278,
##   279, 287, 291, 292, 297, 300, 311, 314, 318, 327, 329, 332, 334, 337,
##   347, 349, 350, 351, 352, 353, 355, 356, 360, 371, 383, 386, 388, 394,
##   395, 397, 400, 401, 402, 406, 407, 410, 417, 423, 427, 431.
## - Based on the following method and threshold: mcd (34.528).
## - For variables: blastintensity.t, blastduration.t,
##   blastintensity.duration.t, KIMS.t, BSCS.t, BAQ.t, SHS.t, SHS.mean.t,
##   SHS.aggravation.t, PANAS_pos.t, PANAS_neg.t, IAT.t, SOPT.t.
plot(x)

There are 95 multivariate outliers according to the MCD method.

This time, instead of winsorizing these cases, we exclude them to see whether it makes any difference. We exclude only multivariate outliers, not univariate ones.

Winsorization

Visual assessment and the MAD method confirm the presence of outlier values. We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them by bringing extreme values back to within 3 deviations. Instead of using the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:

Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
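The MAD-based winsorization performed by `rempsyc::winsorize_mad` can be sketched in base R (the function name and data are illustrative): values beyond ±3 MADs from the median are pulled back to those bounds rather than removed.

```r
# Cap values at median +/- criteria * MAD (mad() includes the 1.4826
# consistency factor).
winsorize_mad_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  bound <- criteria * mad(x, na.rm = TRUE)
  pmin(pmax(x, med - bound), med + bound)
}

winsorize_mad_sketch(c(1, 2, 3, 4, 100))  # 100 is capped near 7.45
```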

# Winsorize variables of interest with MAD
data <- data %>% 
  group_by(condition) %>% 
  mutate(across(all_of(col.list), 
                winsorize_mad,
                .names = "{.col}.w")) %>% 
  ungroup()

col.list <- paste0(col.list, ".w")

Standardization

We can now standardize our variables.

data <- data %>%
  mutate(across(all_of(col.list), standardize, .names = "{.col}.s"))

# Update col.list
col.list <- paste0(col.list, ".s")

We are now ready to compare the group condition (Control vs. Mindfulness Priming) across our different variables with the t-tests.

t-tests

nice_t_test(data, 
            response = col.list, 
            group = "condition") %>% 
  nice_table(highlight = 0.10, width = .80)
## Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
## For the Student t-test, use `var.equal = TRUE`. 
##  

Interpretation: There seems to be a preexisting difference in IAT levels: the mindfulness group seems to have higher implicit aggression than the control group.
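As the note above indicates, `nice_t_test` relies on base R’s `t.test`, which defaults to Welch’s test. A minimal equivalent with simulated data (variable names are illustrative):

```r
# Welch's t-test (var.equal = FALSE, the default) does not assume equal
# variances; var.equal = TRUE gives the classic Student t-test instead.
set.seed(1)
d <- data.frame(
  condition = rep(c("Control", "Mindfulness"), each = 50),
  y = c(rnorm(50, mean = 0), rnorm(50, mean = 0.3))
)

welch   <- t.test(y ~ condition, data = d)
student <- t.test(y ~ condition, data = d, var.equal = TRUE)
welch$method  # "Welch Two Sample t-test"
```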

Violin plots

Intensity * Duration

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.duration.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Blast Intensity

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Blast Duration

nice_violin(data, 
            group = "condition", 
            response = "blastduration.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Means, SD

Let’s extract the means and standard deviations for journal reporting.

Intensity * Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity.duration),
              SD = sd(blastintensity.duration),
              N = n()) %>% 
  nice_table(width = 0.40)

Blast Intensity

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity),
              SD = sd(blastintensity),
              N = n()) %>% 
  nice_table(width = 0.40)

Blast Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastduration),
              SD = sd(blastduration),
              N = n()) %>% 
  nice_table(width = 0.40)

Moderations (confirmatory)

Let’s test whether our variables interact with the experimental condition. But first, let’s check the model assumptions.

Assumptions

Intensity * Duration

big.mod3 <- lm(blastintensity.duration.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod3)

Blast Intensity

big.mod1 <- lm(blastintensity.t.w.s ~ condition_dum*BSCS.t.w.s
                 # condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod1)

Blast Duration

big.mod2 <- lm(blastduration.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod2)

State Hostility

big.mod4 <- lm(SHS.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod4)

big.mod5 <- lm(SHS.mean.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod5)

big.mod6 <- lm(SHS.aggravation.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod6)

Affect

big.mod7 <- lm(PANAS_pos.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod7)

big.mod8 <- lm(PANAS_neg.t.w.s ~ condition_dum*BSCS.t.w.s
                 # + condition_dum*KIMS.t.w.s + condition_dum*BAQ.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod8)

All model assumptions look reasonably good overall, even with all these variables. The reference lines for linearity and homoscedasticity deviate slightly, but nothing concerning. Let’s now look at the results.

Moderations

Intensity * Duration

big.mod3 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Blast Intensity

big.mod1 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

State Hostility

list(big.mod4, big.mod5, big.mod6) %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Affect

list(big.mod7, big.mod8) %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Interpretation: The condition by trait self-control (brief self-control scale, BSCS) interaction does not come up.

Interaction plots

Let’s plot the main interaction(s).

Intensity * Duration

interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Blast Intensity

interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Blast Duration

interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Interpretation: It appears that there are no interactions.

Simple slopes

Let’s now look at the simple slopes, reported for completeness even though no interaction reached significance.

Intensity * Duration

big.mod3 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

Blast Intensity

big.mod1 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

Interpretation: There seems to be no effect of priming mindfulness on blast intensity as a function of self-control.
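The simple-slopes logic used by `nice_lm_slopes` can be sketched in base R (simulated data; names are illustrative): the condition effect at a given moderator level is obtained by re-centering the moderator before refitting the interaction model.

```r
# Condition effect at -1 SD, the mean, and +1 SD of a standardized
# moderator, via re-centering.
set.seed(4)
d <- data.frame(cond = rep(0:1, 100), mod = rnorm(200))
d$y <- 0.2 * d$cond * d$mod + rnorm(200)

slope_at <- function(level) {
  m <- lm(y ~ cond * I(mod - level), data = d)
  unname(coef(m)["cond"])  # condition effect when mod == level
}

sapply(c(-1, 0, 1), slope_at)  # simple slopes at the three levels
```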

Moderations (exploratory)

Let’s test whether our variables interact with the experimental condition. But first, let’s check the model assumptions.

Assumptions

Intensity * Duration

big.mod3 <- lm(blastintensity.duration.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod3)

Blast Intensity

big.mod1 <- lm(blastintensity.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod1)

Blast Duration

big.mod2 <- lm(blastduration.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod2)

State Hostility

big.mod4 <- lm(SHS.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod4)

big.mod5 <- lm(SHS.mean.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod5)

big.mod6 <- lm(SHS.aggravation.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod6)

Affect

big.mod7 <- lm(PANAS_pos.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod7)

big.mod8 <- lm(PANAS_neg.t.w.s ~ condition_dum*KIMS.t.w.s +
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s +
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s
               , data = data, na.action="na.exclude")
check_model(big.mod8)

All model assumptions look reasonably good overall, even with all these variables. The reference lines for linearity and homoscedasticity deviate slightly, but nothing concerning. Let’s now look at the results.

Moderations

Intensity * Duration

big.mod3 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Blast Intensity

big.mod1 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

State Hostility

list(big.mod4, big.mod5, big.mod6) %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Affect

list(big.mod7, big.mod8) %>% 
  nice_lm() %>% 
  nice_table(highlight = TRUE)

Interpretation: The condition by trait self-control (brief self-control scale, BSCS) interaction does not come up.

Interaction plots

Let’s plot the main significant interaction(s).

Intensity * Duration

interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Blast Intensity

interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Blast Duration

interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

State Hostility

interact_plot(big.mod4, pred = "condition_dum", modx = "SOPT.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Working Memory")

Negative Affect

interact_plot(big.mod8, pred = "condition_dum", modx = "SOPT.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "condition_dum", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Working Memory")

Interpretation: The only interactions of interest here are in the last two tabs, for state hostility and negative affect. Essentially, people in the mindfulness condition who also had high working memory showed higher state hostility and negative affect than those in the control condition or those with low working memory. Conversely, people in the mindfulness condition with low working memory showed lower state hostility and negative affect. Relative to our original hypotheses, it seems that working memory has simply replaced self-control.

Simple slopes

Let’s now look at the simple slopes (focusing on the significant interactions).

Intensity * Duration

big.mod3 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

Blast Intensity

big.mod1 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s") %>% 
  nice_table(highlight = TRUE)

State Hostility

big.mod4 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "SOPT.t.w.s") %>% 
  nice_table(highlight = TRUE)

Negative Affect

big.mod8 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "SOPT.t.w.s") %>% 
  nice_table(highlight = TRUE)

Interpretation: There seems to be no effect of priming mindfulness on blast intensity as a function of self-control.

Conclusions

Based on the results, it seems that the predicted interaction between self-control and the priming mindfulness manipulation does not come up. The exploratory analyses including the larger models also did not show the expected effects.

Previously, the results revealed an interaction between the condition and working memory (SOPT) on state hostility, its two subscales, and negative affect. However, this was due to an error, as the non-transformed, non-winsorized, non-standardized variable was used. Now that we use the proper variable, no interaction comes up.

Package References

report::cite_packages(sessionInfo())
  • Analytics R, Weston S (2022). iterators: Provides Iterator Construct. R package version 1.0.14, https://CRAN.R-project.org/package=iterators.
  • Corporation M, Weston S (2022). doParallel: Foreach Parallel Adaptor for the ‘parallel’ Package. R package version 1.0.17, https://CRAN.R-project.org/package=doParallel.
  • Long JA (2019). interactions: Comprehensive, User-Friendly Toolkit for Probing Interactions. R package version 1.1.0, https://cran.r-project.org/package=interactions.
  • Lüdecke D, Ben-Shachar M, Patil I, Waggoner P, Makowski D (2021). “performance: An R Package for Assessment, Comparison and Testing of Statistical Models.” Journal of Open Source Software, 6(60), 3139. doi:10.21105/joss.03139 https://doi.org/10.21105/joss.03139.
  • Lüdecke D, Patil I, Ben-Shachar M, Wiernik B, Waggoner P, Makowski D (2021). “see: An R Package for Visualizing Statistical Models.” Journal of Open Source Software, 6(64), 3393. doi:10.21105/joss.03393 https://doi.org/10.21105/joss.03393.
  • Makowski D, Lüdecke D, Patil I, Thériault R, Ben-Shachar M, Wiernik B (2023). “Automated Results Reporting as a Practical Tool to Improve Reproducibility and Methodological Best Practices Adoption.” CRAN. https://easystats.github.io/report/.
  • Microsoft, Weston S (2022). foreach: Provides Foreach Looping Construct. R package version 1.5.2, https://CRAN.R-project.org/package=foreach.
  • Patil I, Makowski D, Ben-Shachar M, Wiernik B, Bacher E, Lüdecke D (2022). “datawizard: An R Package for Easy Data Preparation and Statistical Transformations.” Journal of Open Source Software, 7(78), 4684. doi:10.21105/joss.04684 https://doi.org/10.21105/joss.04684.
  • Peterson RA (2021). “Finding Optimal Normalizing Transformations via bestNormalize.” The R Journal, 13(1), 310-329. doi:10.32614/RJ-2021-041 https://doi.org/10.32614/RJ-2021-041. Peterson RA, Cavanaugh JE (2020). “Ordered quantile normalization: a semiparametric transformation built for the cross-validation era.” Journal of Applied Statistics, 47(13-15), 2312-2327. doi:10.1080/02664763.2019.1630372 https://doi.org/10.1080/02664763.2019.1630372.
  • R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
  • Revelle W (2022). psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University, Evanston, Illinois. R package version 2.2.9, https://CRAN.R-project.org/package=psych.
  • Stekhoven DJ (2022). missForest: Nonparametric Missing Value Imputation using Random Forest. R package version 1.5. Stekhoven DJ, Buehlmann P (2012). “MissForest - non-parametric missing value imputation for mixed-type data.” Bioinformatics, 28(1), 112-118.
  • Thériault R (2022). rempsyc: Convenience Functions for Psychology. R package version 0.1.1.2, https://rempsyc.remi-theriault.com.
  • Tierney N (2017). “visdat: Visualising Whole Data Frames.” JOSS, 2(16), 355. doi:10.21105/joss.00355 https://doi.org/10.21105/joss.00355, http://dx.doi.org/10.21105/joss.00355.
  • Tierney N, Cook D, McBain M, Fay C (2021). naniar: Data Structures, Summaries, and Visualisations for Missing Data. R package version 0.6.1, https://CRAN.R-project.org/package=naniar.
  • Wickham H, François R, Henry L, Müller K, Vaughan D (2023). dplyr: A Grammar of Data Manipulation. R package version 1.1.0, https://CRAN.R-project.org/package=dplyr.